ISSN 0281-9864

KOGNITIONSVETENSKAPLIG FORSKNING
Cognitive Science Research

The Idealized World: A Problem of Semantics

Bernhard Bierschenk
Lund University, Sweden

1990, No. 31

Communications should be sent to:
Cognitive Science Research
Paradisgatan 5
Lund University
S-223 50 Lund, Sweden

Coordinator: Bernhard Bierschenk
Department of Psychology

Abstract

In a number of preceding articles in this series it has been demonstrated that the AaO paradigm can be used to develop a cognitive helical structure. The present article advances the process to the fourth of five phases. Of the four dependent variables emerging at this stage, two will be discussed, namely (1) zero processing and (2) variations in form. The other two will be presented in a forthcoming article. A short introduction gives some of the basics that govern the process.

The discussion of the variables begins with some arguments put forward within cognitive science. According to the general conviction, symbol processing results in strings of symbols or sentences on which operations such as addition, deletion and insertion can be performed according to formally defined rules. The meaning of symbolic expressions is defined by propositional knowledge statements about the world and by truth conditions. But meaning, i.e. semantics, is not bound to a linguistic context of interpretation. As a comprehensive term it denotes the "knowledge system" underlying various approaches to natural language processing. A system's semantics represents the constructor's world view, but also a theory of how that view shall be organized to enable disambiguation of strings of symbols.

As a result, the conceptual base is constructed within formally different boundaries compared to the linguistic base. Thus, the semantic principle cuts across the linguistic versus non-linguistic dimension.
The principal implications of a semantic approach to interpretation and the processing of a conceptual base are discussed within the context of the degree of linguistic closeness of the meta-language used.

The AaO Paradigm

It is beyond doubt that ideas about cognitive functions get their representation in linguistic form when natural language is made the medium through which a cognitive function shall be read. In a series of articles (Bierschenk, 1984a, b; 1986) it has been demonstrated how the Agent-action-Objective (AaO) paradigm, when put into a formalism, controls the cognitive processes of differentiation and integration. Thus, this paradigm constitutes the basis for a synthesis of successive segments in the development of a cognitive helical structure. Every novel phase in this development starts with a twist. Therefore, the manipulable factors of preceding phases can only secondarily influence the cooperation of the factors to be discussed.

The mechanism governing the information synthesis allows the following five basic activities to be carried out: (1) fixating the component to which the value (-) is bound, (2) binding the value (-) right adjusted, (3) mobilizing the component to which the value (+) is bound, (4) binding the value (+) right adjusted, and (5) supplementing for place holders by transference of letters or letter combinations from one segment to the next, ensuring the preservation of identity. All possible pairings are (--, -+, +-, ++), and the change of information can be studied for all except the first combination of signs. In this development, the fourth phase is characterized by a cooperation of the manipulable factors Type and Function. A fixation of both means stipulating the zero hypothesis of cognitive processing, i.e. a "word concept" or label is present, while the mobilization of both factors implies maximal information synthesis, leading to the emergence of knowing.
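Read literally, the fourth phase pairs two binary factors, Type and Function, with (-) for fixation and (+) for mobilization. A minimal sketch of the four pairings and the outcomes the text assigns to them (the function and variable names are my own, not the paper's):

```python
# Sketch of the fourth-phase factor pairings described above.
# The factor names (Type, Function) and the four outcome labels come
# from the text; the (Type, Function) ordering of the signs and all
# identifiers are illustrative assumptions.
from itertools import product

LABELS = {
    ("-", "-"): "zero processing (word concept / label present)",
    ("+", "-"): "variation in form",
    ("-", "+"): "variation in structure",
    ("+", "+"): "maximal information synthesis (knowing)",
}

def classify(type_value: str, function_value: str) -> str:
    """Map a (Type, Function) sign pairing onto the described outcome."""
    return LABELS[(type_value, function_value)]

for pair in product("-+", repeat=2):
    print(pair, "->", classify(*pair))
```

The complementary pairs (--, ++) and (-+, +-) then correspond to the diagonal and off-diagonal entries of this small table.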
What operates in this new phase are variations in form (+-) and variations in structure (-+). In this respect the relations (--, ++) and (-+, +-) are complementary to each other. This double asymmetry gives every pair a certain control over the development of the other. The asymmetrical pairs constitute the mechanism for the developmental control over conceptual differentiation and integration.

Zero Processing by Fixating the Type and Function Factor

In linguistic theory, grammar only partially solves the problem of explaining how a speaker of a language understands and produces sentences, especially his ability to produce novel sentences. Semantics takes over where grammar leaves off. The basic domain of semantics for the synchronic description of language was discussed within the framework of transformational grammar by Katz and Fodor (1964). While grammatical theory concentrates on syntactic ambiguity, a semantic theory shall primarily explain four semantic problems: (1) how to determine the different readings of a sentence, (2) how to determine the way in which one part of a sentence disambiguates the reading of another part, (3) how to distinguish anomalies from semantic regularities, and (4) how to determine the non-grammatical rules on which paraphrasing rests. The theoretical question concerns the kind of conceptual base needed, i.e. the contextual reference.

The lexicon as conceptual base. Katz and Fodor reason around a non-linguistic and a linguistic context ("setting") and end up in favour of the latter. The main reasons are the following. Although a "complete" theory would be more powerful than a theory of semantic interpretation, the latter is to be preferred. The authors (1964, p. 488) argue: "Since the readings that a speaker gives a sentence in setting are a selection from those the sentence has in isolation, a theory of semantic interpretation is logically prior to a theory of selective effect of setting."
In other words, only a universal or total model is accepted, and since in the construction of a setting it is impossible to take all non-linguistic knowledge into account, only a linguistic context will do. The next step, then, would be to define the contextual range for interpretation. Here the authors have to dismiss the range of discourse, since virtually any information about the world shared by speakers would be needed in interpreting discourse, which makes it similar to a non-linguistic setting. Consequently, if the interpretation can be restricted to grammatical and semantic relations only, which obtain within and between sentences in a discourse, the range might as well be a sentence. The argument for this is that a majority of sentences in discourse are connected by and-conjunctions. This appears to be an astonishingly weak argument, especially as the authors define neither discourse nor sentence.

Semantic Foundation of a Lexicon

The semantic component consists of a dictionary and projection rules, which operate on full grammatical descriptions of sentences and dictionary entries. A crucial problem is that the dictionary supplies more senses for an item than it bears in a given occurrence, which means that the projection rules must select the appropriate sense. Their operation is governed by the world view. In the theory of Katz and Fodor this means that they work according to compositionality. A projection rule matches semantic markers onto a sentence according to conditions specified partly through grammatical markers and partly through the subpaths the process takes in processing a sentence. The dictionary is the primary component in interpretation. The requirement on a theory of language to incorporate lexicological aspects is obvious from modern linguistics.
The "lexicon" is the theoretical concept denoting knowledge about the set of morphological items of a language together with rules for their systematic combination and the resulting properties. The result of lexical operation is a dictionary of the words in a language. In simulations of understanding by computers, the implicit lexical knowledge of a speaker has to be specified for the computer to operate on. In this connection, semantics is used as a frame of reference for assigning meaning (the world view) to the grammatical elements. The purpose of the discussion of Katz and Fodor (1964) was to explain the semantic principle in lexical specification which could be of relevance for computational approaches to understanding. Therefore, their notion "dictionary" must be taken to mean "lexicon".

One main difference between an ordinary lexical entry and a semantically marked one is that its "senses" are distinguished by mutually exclusive selection rules. This can be illustrated with the branching of the entry "bachelor":

bachelor -> noun -> (Animal) -> (Male) -> Young fur seal
bachelor -> noun -> (Human) -> Person holding lowest degree
bachelor -> noun -> (Human) -> (Male) -> not married
bachelor -> noun -> (Human) -> (Male) -> Young knight

The first information is grammatical, the semantics is marked within parentheses, and the senses, termed "distinguishers", are given after the final arrow. One problem immediately presents itself. The last two paths illustrate a polyadic branching, which means that the item is ambiguous. The selection operates only on semantic markers, so there is no way for a lexicon containing this information to resolve the difference. Katz and Fodor (1964, p. 499) propose a solution in the following way. If a speaker can understand a sentence like "The old bachelor finally died" as unambiguous, this would imply that Young can be a marker instead of a distinguisher.
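The selection problem described above can be made concrete in a few lines. The entry and its four paths come from the text; the data layout and the function name are illustrative assumptions, not Katz and Fodor's formalism:

```python
# Sketch of the "bachelor" entry discussed above.  Each path carries
# (grammatical marker, semantic markers, distinguisher).  The tuple
# layout and the selection function are illustrative assumptions.
BACHELOR = [
    ("noun", ("Animal", "Male"), "Young fur seal"),
    ("noun", ("Human",), "Person holding lowest degree"),
    ("noun", ("Human", "Male"), "not married"),
    ("noun", ("Human", "Male"), "Young knight"),
]

def select(required_markers):
    """Keep only the paths whose semantic markers include the required ones.

    Selection sees markers but never distinguishers, so the last two
    paths, which share the markers (Human, Male), stay ambiguous.
    """
    return [sense for _, markers, sense in BACHELOR
            if set(required_markers) <= set(markers)]

print(select({"Animal"}))          # one reading
print(select({"Human", "Male"}))   # two readings: still ambiguous
```

This is exactly why the authors are driven to promote "Young" from distinguisher to marker: only then would the selection mechanism have something to operate on.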
One problem that the authors seem to have faced is that ambiguity is built into the marker system, which results in a confusion of marker and representation. To add markers is to "increase the precision", according to the authors, but "preciseness" is not defined. Further, this increase of markers is effective up to the point where the features are an exact reflection of the "semantic structure of the language" (p. 500), i.e. an ideal situation is obtained. It is more correct to say that the markers are a means to approximate what might be ideal.

An important consideration concerns the decision on what morphological unit carries the largest amount of semantic information. In a theory of compositionality, each word is decomposable into a set of semantic primitives, morphemes whose meanings additively make up the meaning of the word. In deciding upon entries, this decompositional function is used to single out base words from those compositions in which morphemes do not add any meaning to the base word that is not already semantically marked. These are the grammatical morphemes. The meaning-bearing morphemes are defined by the categories in the representation system. The categories are non-arbitrary and have clear boundaries. With this assumption of objective knowledge, necessary and sufficient conditions are thought to exist, providing for compositional computation of the meaning of complex words and sentences. Although computer programming has shown the difficulty of representing meaning by compositional rules, this view largely dominates both theoretical work and artificial intelligence applications. Moreover, it has not been proved that in understanding a sentence every word is encoded at all (Erickson & Mattson, 1981).
The semantic principle used to compute the linguistic meaning of a word presupposes a decomposition of the word into features, under the hypothesis that on the basis of those features, or propositions about them, a concept (category) underlying the word can be processed. This word-concept approach is taken by Quillian to construct expanding semantic spheres.

Expanding Semantic Spheres

Quillian's (1968) "Teachable Language Comprehender", as discussed in Bierschenk (1986), rests on a conceptual base called a semantic net, which represents Quillian's intuitive understanding of how language might be represented. The net is composed of two types of nodes and two types of links. Representing an attribute requires that something variable can be given a name and that it can be assigned a value within a certain given range of variation. By a description of the clause within a semantic-logic frame, words and classes of words are renamed to properties which, according to the propositional coding, are expressed as p(X), interpreted as: object (X) has the property (p). The relations between different facts in the net make possible a computation of the values that are associated with concepts. As mentioned, the model rests on two types of links. The links pointing outwards relate a class (e.g. Canary) to another class (e.g. Animal). Inward pointing links always point to nodes that are part of a concept representation. By alternately processing two nodes it is possible to form an expanding sphere of nodes around any two starting nodes chosen. The technique implies fact finding and makes every word concept a searcher for every other. A concept is computed when the process has run through the maximal number of nodes making up a certain concept. Thus, Quillian has developed a Node-Net-Intersection Finder, which searches through different kinds of lists made up of morphemes whose inflections have been deleted.
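The alternating expansion of two spheres until they meet can be sketched as a breadth-first search for an intersection node. This is a reconstruction under my own assumptions about the net's layout (a plain adjacency dictionary), not Quillian's actual program:

```python
# Sketch of a Node-Net-Intersection Finder: expand spheres around two
# starting nodes alternately until a common node is found.  The toy net
# and all identifiers are illustrative assumptions.
from collections import deque

NET = {
    "canary": ["bird", "yellow"],
    "bird": ["animal", "wings"],
    "animal": ["skin"],
    "yellow": ["colour"],
    "wings": [],
    "skin": [],
    "colour": [],
}

def intersect(a, b, net=NET):
    """Alternately grow the spheres around a and b; return the first
    node that falls inside both spheres, or None if they never meet."""
    spheres = {a: {a}, b: {b}}
    frontiers = {a: deque([a]), b: deque([b])}
    other = {a: b, b: a}
    while frontiers[a] or frontiers[b]:
        for start in (a, b):                 # alternate the two spheres
            if not frontiers[start]:
                continue
            node = frontiers[start].popleft()
            for nxt in net.get(node, []):
                if nxt in spheres[other[start]]:
                    return nxt               # spheres intersect here
                if nxt not in spheres[start]:
                    spheres[start].add(nxt)
                    frontiers[start].append(nxt)
    return None

print(intersect("canary", "animal"))
```

The point of the sketch is only the control structure: a "concept" emerges as whatever connecting path the two expanding spheres happen to find, which is precisely what the ecological critique below takes issue with.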
The morphemes, that is, the base words, are somewhat misleadingly called "canonical forms" (Quillian, 1968, p. 232). By a hierarchic organization of the nodes, levels have been formed in order to simulate levels of abstraction in a person's computation of different values. The hypothesis is that a person expresses himself on the basis of concepts that are organized in levels. Expressing oneself means a propositional composition corresponding to a path through the net. The conception that natural language understanding proceeds this way would require that an individual builds up huge organizations of explicitly stored and semantically defined facts.

Quillian makes the theoretical assumption that the semantic net has relevance to cognition and refers to the work of, for example, Bartlett and Piaget. It is difficult to figure out from Quillian's presentation how the Schema hypothesis of Bartlett or Piaget could be adequately related to an approach that presupposes denotative data analysis and composition. According to him, a concept is namely made up of a "bundle of properties" (p. 218), which would mean that it is defined by features and the associations between them. From this point of view a concept is formless and structureless, requiring only a logical algorithm for its calculation. A Schema, on the other hand, clearly demands a structural formulation (McCloskey & Glucksberg, 1978).

How humans build up semantic concepts has been made an empirical question. In answering it, Collins and Quillian (1969) tested the network hypothesis on the basis of propositional knowledge formulations. The hypothesis builds on the assumption that an experimental subject can know that a "canary has skin" only when the subject knows that "animals have skin" and that "a canary is an animal". The criterion for testing these conditions is the Reaction Time (RT) measure.
It takes more time to answer true-false statements about propositions in which the type node (Canary) and the class node (Animal) are found at different levels. By measuring RT in subjects' judgments of propositions it is proposed that the conceptual base is isomorphic to a hierarchically organized net of the knowledge world when RT varies between levels.

In proposing the semantic net as a cognitive device necessary for recognizing semantic concepts, Quillian, as well as a whole generation after him, seems to believe: (1) that conceptualizing is a general process, irrespective of which language or individual is concerned, and (2) that some relationships of one small facet of language would be the same for any other facet. Human cognition requires the specification of the agents or human beings to be symbolized, and programs which, at least approximately, can express behavioural science statements on how persons act or interact. Thus Quillian would be urged to express genuine strategic alternatives of conceptualizing, to relate this model to some relevant theories of behavioural science, and, finally, to give evidence that the necessary data for the foundation of his model exist in reality.

To simulate a conceptualizing process based on intralinguistic principles, e.g. as organized in a semantic net, would require a denotation of the complex set of linguistic relations holding in a certain language, even if one, as Quillian does, chooses a restricted facet of that language. Since a theory of linguistic completeness is not possible (despite Katz' and Fodor's assertion), a net of semantic concepts becomes a computational artefact. From an ecological perspective, a semantic network represents a collection of frozen symbols, in which the many discontinuous variables necessary for a conceptualizing process to be performed are missing. Conceptualizing seems to be analogous with directedness and retrieval, i.e. with typical computer behaviour.
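The levels hypothesis tested by Collins and Quillian, that RT grows with the number of levels separating type node and class node, can be illustrated as follows. The canary/bird/animal hierarchy comes from the text; the timing constants and identifiers are illustrative assumptions, not the experimental values:

```python
# Illustration of the levels hypothesis: verifying "a canary has skin"
# should take longer than verifying a property stored at the canary
# node itself, because more ISA links must be climbed.  The hierarchy
# fragment is from the text; timing constants are assumptions.
ISA = {"canary": "bird", "bird": "animal"}      # superclass links
PROPERTIES = {
    "canary": {"is yellow"},
    "bird": {"has wings"},
    "animal": {"has skin"},
}

def levels_to_verify(concept, prop):
    """Count how many ISA links must be climbed to reach the property."""
    steps, node = 0, concept
    while node is not None:
        if prop in PROPERTIES.get(node, set()):
            return steps
        node = ISA.get(node)
        steps += 1
    return None  # property is not stored anywhere in the hierarchy

def predicted_rt(concept, prop, base_ms=1000, per_level_ms=75):
    steps = levels_to_verify(concept, prop)
    return None if steps is None else base_ms + per_level_ms * steps

print(predicted_rt("canary", "is yellow"))  # property at the node itself
print(predicted_rt("canary", "has skin"))   # property two levels up
```

The monotone increase of predicted RT with level depth is the regularity Conrad's frequency-controlled replication, discussed below, calls into question.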
There is no anchorage in behavioural models or theoretical notions from behavioural science. This computationally anchored conception of cognition provides no links for testing substantial hypotheses about cognitive behaviour, i.e. recognition. It is not even possible to test the sensitivity of the model.

The Semantic Illusion

Quillian's Node-Net-Intersection Finder presupposes a hierarchy in the process and a constant increase of the time factor for the access to a defined attribute at a given level. The assumption has been tested by Conrad (1972), who repeated the Collins and Quillian experiment, controlling their semantic net with respect to the relative frequency with which different attributes had been associated with a given word name. Linearity could be proved only when the RT pattern, based on a normatively created experimental material, was covaried with high-frequency and low-frequency attributes respectively.

With an even higher differentiation in the RT pattern, as for example by scaling and clustering defined attributes of a word (Rips, Shoben & Smith, 1973), the proposal of a semantic net as a cognitive mechanism becomes even more doubtful. A somewhat different approach to the study of conceptual semantics in nets is the investigation of Glass and Holyoak (1974). In their study it is hypothesized that relations of similarity and non-similarity are stored. This model assumes that word meanings are represented through a hierarchy of markers, where markers of lower order are associated with paths to the dominating ones. Membership within a category is decided through a scaling of similarity relations between concepts. Similarity relations are assumed to represent class inclusion among concepts and classes. Two links with the same label, for example, denote two subsets representing mutually exclusive classes, whereas two subsets not excluding one another get different labels.
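The labelling convention of Glass and Holyoak, same label for mutually exclusive subclasses, different labels for possibly overlapping ones, can be sketched like this (the link representation and all names are my own illustration):

```python
# Sketch of the labelling convention described above: two subclass
# links carrying the SAME label point to mutually exclusive classes;
# links with different labels may overlap.  The (parent, label, child)
# encoding and the example classes are illustrative assumptions.
LINKS = [
    ("animal", "L1", "bird"),
    ("animal", "L1", "fish"),   # same label as "bird": mutually exclusive
    ("animal", "L2", "pet"),    # different label: may overlap with both
]

def mutually_exclusive(a, b, links=LINKS):
    """Two classes exclude one another iff they hang under the same
    parent on links that carry the same label."""
    for p1, l1, c1 in links:
        for p2, l2, c2 in links:
            if {c1, c2} == {a, b} and p1 == p2 and l1 == l2:
                return True
    return False

print(mutually_exclusive("bird", "fish"))  # same label: exclusive
print(mutually_exclusive("bird", "pet"))   # different labels: not
```

This is the "a priori meta-knowledge" mentioned next: the exclusiveness of classes is legislated by the labelling, not discovered from the data.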
The way of organizing semantic nets makes it possible to introduce a priori meta-knowledge. RT measures can then be used to investigate simple class memberships expressed in propositions like "All robins are birds", which, according to the predictions of the model, would be verified faster than "All robins are animals". The RT measures are assumed to reflect the order of accessibility of semantic information; that is, relations of class membership are processed as information instead of groups of characteristic features, as in the earlier experiments. The RT pattern is used for making inferences about the organization of the categories. The predictive power of the model is judged by correlating the RT measures with so-called production norms. McCloskey and Glucksberg (1978), by their repeated experiment, show, among other things, that the Glass and Holyoak model seems to be superior to elementwise comparison, the requirement of Quillian's model. Their experimental results do not support a computation of class inclusion.

Quillian's semantic net of word senses was presented as a typical example of semantics used for interpretation in a linguistic context. In the next section a system will be presented in which the semantics is not linguistically based and whose syntactic depth has been adapted accordingly.

Variations in Form by Mobilizing the Type Factor

Abelson's (1973) purpose is to simulate a conceptual base for affectively influenced cognition. The maximal unit for the operation of such a base is a sentence, and there is an explicit reference to the hierarchical S -> NP + VP type of syntactic organization. However, the relevance of this model for the disambiguation procedure is not stated. The morphemes of interest are lexical words belonging to the main classes noun, verb, and adjective (modifying both). But the level of abstraction is beyond the word-concept relationship, which implies that class boundaries for words are stretched.
Important for the disambiguation procedure is, therefore, to distinguish word groups in terms of their syntactic function. For example, the strings "attacked" and "made an attack" shall both be encoded as verbs. As a consequence, some detection mechanism for determining a subsequent noun group must apply. By indicating a syntactic model for determining the relationship between the noun groups, the subject-object relation is judged to carry meaningful information in the system.

Varying Actors and Events in Social Scenarios

The subject-object relation is mapped onto a social paradigm of the Actor-action-Objective type. The roles are held by socio-political concepts, represented by proper nouns such as "The Super Powers", "Latin America", "Communism", and by actions such as "Physical Attack" against a domain of political events. The semantic markers are category names developed for nouns and verbs. These are used to classify entries and to relate them to each other by similarity attribution carried by adjectives. In accordance with the knowledge domain, this lexicon may contain both English words and encyclopedic entries represented by English words. The principle of attribution of socio-political value is applied according to the same compositional rules as for linguistic interpretation. The meaning of a noun (Soviet Union) is composed of adjective attributes (friendly), which in turn get their meaning from the head noun of the composition. This ideational meaning must be in cognitive balance with the rest of the sentence. Thus the projection rules operate with selection based on a theory of semantic possibilities. The result of a disambiguation has to be a combination of semantic markers which produce so-called "generic events" like "Communist Nation Physical Attack Neutral Country".
When this composition is projected onto a natural sentence like "The Soviet Union could invade Afghanistan", the input sentence is interpreted as possible, or true from the ideational point of view. Through the composition of semantic markers for verbs such as "wanting", "hurting", and "preventing", so-called episodes can be set up, describing the temporal conditions of a plan. This construction requires the range of discourse, which is used as lexical information. A developed action plan is a frame characterizing an Actor, and, therefore, the knowledge of such a well-defined sequence can help in judging "novel" events.

An elaboration of the semantics has been introduced and described by diagraming conceptual dependencies. Conceptual dependency on different levels provides a categorisation in depth regarding social dependencies; that is, it becomes possible to differentiate between actors depending on other actors (becoming agents) and to interpret causes and effects in social settings. This kind of dependency is illustrated in Figure 1:

[Figure 1 is a conceptual dependency graph, not legible in the scan: E, via an offer, is linked to F and the action B.]

Figure 1. Causes and Effects in Social Settings

The graph in Figure 1 may be explained as "E (Actor) proposes by some offer (serving as Instrument) that F (Agent) shall perform the action B". A consequence of this kind of dependency statement is that the set of semantic markers is extended with classes of relevance to social psychology. Thus interpersonal relations pertaining to dimensions such as "evaluation of others" and "influence on others" are represented. It is easy to see the transformed similarity with linguistic dimensions.

The Implication Formula

An event that can be observed can also be specified. This is the basic assumption underlying Abelson's (1973) theory of social cognition. "Event" is here used in Carnap's (1945) sense as "event of a certain kind", which implies that a theory of events naturally belonging together may be developed.
Such a theory is Heider's balance theory. It assumes a balance or symmetry model represented by a triangle. Balance of tension is the normal state and is computed through a multiplication of plus and minus signs: a positive product denotes balance and a negative one imbalance. Imbalance is presumed to impose a pressure on the system in the direction of balance. Against this background, Abelson makes the basic presumption that tension between persons (actors) or political systems emerges as a result of imbalance in their valuation of events. In "The Structure of Belief Systems" he discusses the prerequisites of an "ideology machine", a simulator which would be capable of producing possible (instead of probable) political purposes, actions, plans, etc. (e.g., in the Cold War), with the differences in symbolic systems between Eastern and Western ideology as point of reference.

His theory of "hot and cold cognition" is intended to model a representation of an observer's way of cognizing causes of political events. By way of so-called "Implication Molecules" Abelson tries to get a precise attribution of causes to events. The "Implication Molecule" may be generally expressed as

If X does Y and X causes Y then X wants Y

and requires an operationalization of cause-event sequences as a basic mechanism, together with an associative relation between cause and event.

To illustrate associations between causes and events Abelson makes use of Schank's (1973) C-diagram. But while Schank represents state change, Abelson's interest is to represent the event as the fulfilment of a purpose. The significant difference between the two is indicated by the representation of the verb "want" and illustrated in Figure 2:

[Figure 2, contrasting Schank's notation (one PTRANS book ... one pleased) with Abelson's (E wants E (GO) possess book), is not legible in the scan.]

Figure 2. Different Conceptual Dependencies

Abelson (1973, p. 293) comments on the difference, saying that it is due more to convenience than to theoretical disagreement.
This assertion must be regarded as an understatement. The convenience, namely, concerns the fact that Abelson uses this notational system in an area of application which Schank does not.

The building block in the ideology machine. The implication molecule is built up of pairs of Action atoms (A), Purpose atoms (P), and State atoms (S). From their presentation we choose the following diagrams for representing the types:

[Figure 3, a TRANS-diagram in which an object X is moved with Y as instrument, is not legible in the scan.]

Figure 3. The Action-Atom

This exemplifies the Directive case, in which an object X is moved from a place (a) to a place (b). Y indicates that an instrument is necessary for the transition.

[Figure 4, showing "E happy", "E possesses (X)", and "E agent (E; A)", is not fully legible in the scan.]

Figure 4. The State-Atom

The term "agent" has a special meaning in this system (see Fig. 1). When F accepts a proposal by E to perform a certain action, he becomes the agent of E. An actor (E) may, of course, act as agent for his own sake, but when institutions or foreign countries are involved it is of special import to distinguish between E and F with respect to social contracts and political dependencies. As the example in Figure 5 shows, the actors in the dependency may differ.

[Figure 5, showing "Soviet wants Afghanistan calm", is not fully legible in the scan.]

Figure 5. The Purpose-Atom

An implication molecule requires the following associations:

1. PS The S-atom is the state connected to the "want" in the P-atom
2. AS The A-atom is causally bonded to the S-atom
3. PA The actor in the A-atom is an agent (for action A) of the actor in the P-atom

With these bonds a molecule representing a political "want" gets the diagram shown in Figure 6.

[Figure 6, connecting E, F, "want", an action, and a condition, is not legible in the scan.]

Figure 6. Representation of a Political "Want"

The molecular unit is the essential building block of all belief systems. For a "want" to be fulfilled, planning has to take place in order to connect several actions and states. A plan may be serial, called a chain, or called a tree when several S are necessary for an A.
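Read as a data structure, the molecule and its three bonds (PS, AS, PA) can be sketched as follows. Only the three bond definitions come from the text; the field names, the simplified agency check, and the example values are illustrative assumptions:

```python
# Sketch of an implication molecule: a Purpose atom, an Action atom,
# and a State atom held together by the PS, AS, and PA bonds listed
# above.  Field names and the simplified PA check (actor identity in
# place of a full agency relation) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PurposeAtom:
    actor: str
    wanted_state: str      # the S connected to the "want"

@dataclass
class ActionAtom:
    actor: str
    action: str
    causes_state: str      # causal bond to an S-atom

@dataclass
class StateAtom:
    name: str

def is_molecule(p: PurposeAtom, a: ActionAtom, s: StateAtom) -> bool:
    """Check the three bonds of an implication molecule."""
    ps = p.wanted_state == s.name      # PS: S is the wanted state
    a_s = a.causes_state == s.name     # AS: A causally produces S
    pa = a.actor == p.actor            # PA: simplified agency check
    return ps and a_s and pa

p = PurposeAtom(actor="E", wanted_state="calm")
a = ActionAtom(actor="E", action="negotiate", causes_state="calm")
s = StateAtom(name="calm")
print(is_molecule(p, a, s))
```

A chain or tree plan would then link several such molecules by letting the state produced by one action serve as a prerequisite of the next.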
These types of trees are presented in Figure 7:

[Figure 7, showing a chain (A1, S1, A0, S0) and a tree, is not fully legible in the scan.]

Figure 7. Examples of Tree Structures

The second tree in this example represents a plan in which the actor performs action A2 with the help of the prerequisite S4 to produce S2, which, together with S1 and S3, makes possible action A1, leading to state S0. The plan becomes a network when, for example, the actor undertakes several actions concurrently or when there are multiple consequences.

To realize a plan two "enablements" are needed: (1) instrumental control and (2) social contract. The first concerns the actor of S having access to, or the possibility to use, the instruments used in A. The second concerns the actor of A becoming an agent of the prior actor in the chain. The nucleus of a plan is the action, represented by the verb on the language level. The states refer to knowledge and accesses which must be true for a plan to succeed, but which are often left implicit in informal discussion (Abelson, 1973, p. 310).

Abelson does not concretize his outline with examples from some text material. Therefore, we would like to illustrate the model by testing it on text material created for the purpose of studying "beliefs" and "wants" in social science research. The text consists of interviews with researchers about their perception and evaluation of their information search behaviour (Bierschenk, 1974). One interview question concerned the way in which information search could be improved. Thus, a person's "wants" in this domain can be accessed through the statements belonging to this question. The coding format gives access to all statements expressing "wants". Linguistically termed, the sentences shall have been coded as expressing finality or volition connected to an action related to a goal. The computer output from one person contains the following two sentences:

(1) It would, of course, be good to get as much as possible of the reviews made so that one knows what to read.
(2) There is not one person who would be able to make a good summary.

In Abelson's sense, these molecules would express the possible beginning and end of a plan. Through these sentences it may be assumed that some strategy has been undertaken in between them, showing the way to reach what the last sentence stands for. A search in the text shows that there are expressions of prerequisites as well as of action and purpose, in Abelson's sense. To make the term "plan" concrete we present the piece of text together with A, P, and S symbols:

A+P It would, of course, be good to get as much as possible of the reviews made so that one knows what to read.
P One must come to a new evaluation of the researcher.
S I am educated in the tradition of citation, you know.
S To make reference in the text to something you haven't read is very bad manners.
S There is no chance for the researcher to read the original material,
A but he must be able to trust what others have done.
S+P There is not one person who would be able to cover all areas so well that he could manage to make a good summary.

With Abelson's symbols this plan is represented in Figure 8:

[Figure 8 is not fully legible in the scan; its recoverable node labels are: "New evaluation of the researcher must come about", "E is educated in the tradition of citation", "E evaluates references to texts that are not read as bad manners", "The researcher cannot read original material", "The researcher should be able to trust others' summaries", "Good summaries of research is the goal", "Nobody can do it - the goal is not attainable", "E gets to know?", "E wants to know what to read (P1)", "Review should be made by someone (unspecified)".]

Figure 8. Need for Research Information

Starting with the first sentence, the researcher, let us call him "Ego", wants to know what he should read to get the best possible information about his own field of inquiry. The final state, then, would be him having access to his "wants". However, the last sentence expresses that this goal has not yet been attained.
Ego does not inform about his strategy for reaching the goal, but instead about the reasons for his beliefs, expressed as prerequisites in his cognitive structure. The chain within S1 contains Ego's experience. The evaluation of the researcher (P2) should result in trust (S5). The prerequisites lie within Ego himself (S2, S3, S4). Since this represents the (overtly expressed) cognitive structure of Ego, there is no A-atom for action and the chain is not a complete plan. Therefore, it is marked by a box as belonging to S1. The structure of S1 causes P1 to be expressed. A drag on A1, however, is S6, containing a P-S pair expressing that A1 cannot be performed the way Ego wants; because of this, S0 is not reached. Ego wants somebody (an agent?) to take care of the problem. The A1-S0 pair implies an unsolved information problem, which is why a further check of the material was made to see if there was any information about E being able to do something himself. The information from the sentences following those presented so far resulted in the graph displayed in Figure 9.

[Figure 9. Representation of an Action Strategy]

This is, namely, what the researcher further tells: "There is another solution to the information problem, of course. And this is simply to realize that either I have to spend my time reading things, or I have to spend it doing things. You can't do both." Evidently, Ego has two possible strategies. Which one he chooses still does not emerge from the surface. We may conclude the discussion by stating that Ego seems frustrated, and that it is easy to infer a negative evaluation of information accessibility from this imbalance.

In Abelson's view, all belief systems have standardized knowledge up to the level of plans. The lower the level, the more common the knowledge represented.
To distinguish one belief system from another, however, the level of script is necessary, because without scripts no ideology exists. Scripts emerge through the categorisation of themes. Sequences of themes involving the same actors, although with different dependencies (interdependencies) from one theme to another, imply a script. The development of an event sequence may be exemplified with the thematic changes of a script called "Revolution":

T8(E;F) --> T9(E;F) --> T12(E;F) --> T7(E;F) --> T8(F;E)

The starting state is E dominating F. Thereafter events occur with rebellious tendencies (T9), leading to conflict (T12). T7 means that the conflict leads to victory for the dominated actor, and this new dependency relation is denoted by the changed roles (F;E).

Actors may play roles in others' plans. They may give expression to positive and negative values of each other's plans. They may or may not have influence on each other's plans. A combination of these variables has led Abelson to build up a matrix as the basis for a "taxonomy of themes". There are twelve symmetrical and asymmetrical themes. A symmetrical theme is, for example, "Mutual Admiration", and an asymmetrical one "Victory". Abelson's basic premise for this ideational edifice is that a thorough mapping of the conceptual relations in different families of themes would give an indication of structural details in human thinking and the construction of social attitudes.

Attitudinal Framing. The first step in the simulation is the judgment of a belief as regards its credibility and value balance. The balance is based on Abelson's intuitive understanding of tension and tension reduction. The conceptual base is a vocabulary of "generic events" associated with a certain political frame. On this view, an event of a certain kind causes a particular behavioural response every time it occurs (habitualized behaviour).
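This habitualized coupling between generic events and responses can be sketched as a simple lookup. The event vocabulary, the response labels, and the sign-flip rule below are illustrative assumptions of ours, not a reproduction of Abelson's actual tables.

```python
# Sketch of a "generic event" vocabulary with habitualized responses.
# Event categories, responses and valuations are illustrative only.
GENERIC_EVENTS = {
    "build-barrier": "condemn",   # a negatively framed event category
    "give-aid": "approve",        # a positively framed event category
}

# The cognizer's valuation of the actor: himself, a related person,
# or a foreigner (hypothetical numeric coding).
ACTOR_VALUATION = {"self": +1, "related": +1, "foreign": -1}

def response(event):
    """Habitualized behaviour: the same event category always
    triggers the same response."""
    return GENERIC_EVENTS[event]

def valuation(event, actor):
    """The value of an outcome depends on who the actor is: the same
    event switches sign with the cognizer's valuation of the actor."""
    base = +1 if GENERIC_EVENTS[event] == "approve" else -1
    return base * ACTOR_VALUATION[actor]

print(response("build-barrier"))          # condemn
print(valuation("give-aid", "related"))   # +1
print(valuation("give-aid", "foreign"))   # -1: same event, flipped value
```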
Similarly, the social (political) cognizer who becomes familiar with defined categories of actors producing certain categories of events assigns a value to the outcome depending on his valuation of the actor (himself, a related person, or a foreigner). This implies that the attitude toward a certain event, valued on a positive scale, may suddenly switch to a negative valuation depending on the attitudinal balance between actor and cognizer. This explains why an ideology machine answers "Yes" to the question of whether China could have built the Berlin Wall (a question of credibility). Instrumental control is a weaker enablement than the frame of social contract.

Conclusions

Existing methods for algorithmization rely on syntagmatic models, while the organization of data bases builds on principles for the construction of (1) facts, (2) hierarchies, (3) semantic nets, and (4) matrices. The immediate consequence of these principles is that the cognitive processes assumed to underlie natural language are explained within the framework of a machinery of logics.

Cognitive models developed on the basis of these prerequisites necessarily assume human cognition to be expressible through computer programs (Boden, 1977; Waterman & Newell, 1971), since it has been possible to construct an isomorphic relation between symbolic logic and arithmetic. Through this isomorphism, propositional logic could be given the form of arithmetic procedures. As a consequence, a number of syllogisms could be computed through completely automatized procedures. This successful manipulation of logical formulas was further taken as a pretext for the hypothesis that humans should have computable knowledge about the adequate response. Typical of this circumstance is that everything is presented in discrete form, the essential characteristic of a digital computer's knowledge (Weizenbaum, 1976).
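The isomorphism just mentioned, by which propositional logic becomes arithmetic over discrete truth values, can be illustrated with a brute-force validity check. This is a generic sketch of ours, not tied to any particular system of the period.

```python
# Propositional logic reduced to arithmetic over {0, 1}: a syllogism
# is valid when the conclusion holds in every row of the truth table
# in which all the premises hold.
from itertools import product

def implies(p, q):
    """Material implication as arithmetic: p -> q is 0 only for p=1, q=0."""
    return 1 - p * (1 - q)

def valid(premises, conclusion, n_vars):
    """Check validity by enumerating all 2**n_vars truth assignments."""
    for row in product([0, 1], repeat=n_vars):
        if all(f(*row) for f in premises) and not conclusion(*row):
            return False
    return True

# Hypothetical syllogism: from (p -> q) and (q -> r) infer (p -> r).
print(valid([lambda p, q, r: implies(p, q),
             lambda p, q, r: implies(q, r)],
            lambda p, q, r: implies(p, r), 3))  # True

# Invalid form (affirming the consequent): from (p -> q) and q infer p.
print(valid([lambda p, q, r: implies(p, q),
             lambda p, q, r: q],
            lambda p, q, r: p, 3))  # False
```

The point of the sketch is precisely the one criticized in the text: once truth is coded discretely, "reasoning" collapses into an exhaustive arithmetic procedure.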
To solve certain problems of complexity, research efforts within cognitive science have to a great extent been marked by an object orientation. Within this development, cognitive processes are explained with reference to "knowledge representation" through frames (Minsky, 1975). As a consequence, understanding is defined as operations suitable for a positionally defined processing of data input. In agreement with this definition, text processing takes its point of departure in questions like: Are there properties and relations in a natural language that are suitable for a machine to utilize when it shall perform certain specific tasks within a well-defined frame of reference? The answer to this kind of question is presented as the "objective knowledge of the world", whose justification is that information processing builds on the general principles on which a frame generator or Turing machine functions. The processing presupposes that facts about the world are presented in predicate logic and vary depending on logical complexity. Further, it presupposes extensive rule writing. Because of artificial constraints on the surface features of a text, recognition of simple or composite patterns is sufficient. Moreover, the frame hypothesis allows the assumption that mutually exclusive paths exist, which can be stored in the form of programs. If, with reference to a certain query, an appropriate path can be activated, it is assumed that the system shows "learning". This implies that mathematical-logical statements of associations between variables can be given the form of arithmetic procedures, leading to computation being equated with cognition.

By the assumption that language expressions are formulated such that they reflect the individual's conceptual orientation, the problem of how people form concepts has governed the cognitive models that have their anchorage in semantic-logical assumptions.
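The frame idea criticized above can be sketched as named slots with default values plus query-driven filling. The structure below is a generic illustration of Minsky-style frames with hypothetical slot names, not a reconstruction of any specific system.

```python
# Generic sketch of a frame: named slots with default values (the
# stipulated "objective knowledge of the world") that a query can
# fill or override. Slot names and defaults are hypothetical.
class Frame:
    def __init__(self, name, defaults):
        self.name = name
        self.slots = dict(defaults)   # positionally fixed slot structure

    def instantiate(self, **fillers):
        """Known input overrides the defaults; every unset slot is
        simply assumed, which is how frame systems circumvent context."""
        inst = dict(self.slots)
        inst.update(fillers)
        return inst

# A frame for a stereotypic situation, with default expectations.
restaurant = Frame("restaurant", {"food": "meal",
                                  "payment": "bill",
                                  "actor": "customer"})

# A query activates the stored path; only the mentioned slot changes.
print(restaurant.instantiate(food="pizza"))
# {'food': 'pizza', 'payment': 'bill', 'actor': 'customer'}
```

The sketch makes the text's point concrete: such a mechanism can only match input against an already fixed slot structure, i.e. detect "similarity" within predetermined boundary conditions.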
What is typical when conceptual information processing on the basis of natural language is coded through semantic markers has been demonstrated by Quillian's semantic nets. If intelligent machines shall be developed, some principles of organization should be employed that stipulate reference to meta-knowledge, which lies outside the syntactic-semantic-logical specification.

Abelson's attempt to build an ideology machine has been discussed in order to show how concepts and conceptualizations differ when they are based on the classification of social events and the roles actors play within the context of social scenarios. As in any other social contract, certain specific rules ensure that the actors' actions conform to predetermined specifications. It follows that the cognitive mechanism operates on stereotypic situations, the ideology machine's way of avoiding or circumventing context. Moreover, when a system configuration builds on the frame hypothesis, "learning" is modelled on the basis of already known information. Thus, a cognitive mechanism of this type can only detect "similarity" and establish "boundary conditions".

References

Abelson, R. P. (1973). The structure of belief systems. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language (pp. 287-338). San Francisco: Freeman.

Boden, M. A. (1977). Artificial intelligence and natural man. Hassocks, Sussex: Harvester Press.

Bierschenk, B. (1974). Perception, strukturering och precisering av pedagogiska och psykologiska forskningsproblem på pedagogiska institutioner i Sverige (Perception, structuring and definition of educational and psychological research problems at departments of education in Sweden) (Pedagogisk-psykologiska problem, No. 254). Malmö, Sweden: School of Education.

Bierschenk, B. (1982). An ecological model for the processing of symbolic information. Perceptual and Motor Skills, 54, 663-674. (ERIC Document Reproduction Service No. EJ 267 771, TM 507 169)

Bierschenk, B.
(1984 a). Steering mechanisms for knowability (Kognitionsvetenskaplig forskning, No. 1). Lund, Sweden: Lund University, Department of Psychology. (ERIC Document Reproduction Service No. ED 264 246, TM 850 437)

Bierschenk, B. (1984 b). The split between meaning and being (Kognitionsvetenskaplig forskning, No. 3). Lund, Sweden: Lund University, Department of Psychology.

Bierschenk, B. (1986). The cult of understanding (Kognitionsvetenskaplig forskning, No. 15). Lund, Sweden: Lund University, Department of Psychology. (ERIC Document Reproduction Service No. ED 295 944, TM 011 263)

Carnap, R. (1945). Two concepts of probability. Philosophy and Phenomenological Research, 5, 513-532.

Collins, A. M., & Quillian, M. R. (1969). Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior, 8, 240-247.

Conrad, C. (1972). Cognitive economy in semantic memory. Journal of Experimental Psychology, 92, 149-154.

Erickson, T., & Mattson, M. E. (1981). From words to meaning: A semantic illusion. Journal of Verbal Learning and Verbal Behavior, 20, 540-551.

Glass, A. L., & Holyoak, K. J. (1974). Alternative conceptions of semantic memory. Cognition, 3, 313-339.

Katz, J. J., & Fodor, J. A. (1964). The structure of a semantic theory. In J. A. Fodor & J. J. Katz (Eds.), The structure of language (pp. 479-581). Englewood Cliffs, NJ: Prentice-Hall.

McCloskey, M. E., & Glucksberg, S. (1978). Natural categories: Well defined or fuzzy sets? Memory and Cognition, 6, 462-472.

Minsky, M. (1975). A framework for representing knowledge. In P. H. Winston (Ed.), The psychology of computer vision (pp. 217-277). New York: McGraw-Hill.

Quillian, M. R. (1968). Semantic memory. In M. Minsky (Ed.), Semantic information processing (pp. 216-270). Cambridge, MA: MIT Press.

Rips, L. J., Shoben, E. J., & Smith, E. E. (1973). Semantic distance and the verification of semantic relations. Journal of Verbal Learning and Verbal Behavior, 12, 1-20.

Schank, R. C.
(1973). Identification of conceptualizations underlying natural language understanding. In R. C. Schank & K. M. Colby (Eds.), Computer models of thought and language (pp. 187-247). San Francisco: Freeman.

Waterman, D. A., & Newell, A. (1971). Protocol analysis as a task for artificial intelligence. Artificial Intelligence, 2, 285-318.

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. San Francisco: Freeman.